# Large-Parameter Models
## Stt Ca Es Conformer Transducer Large
A Catalan-Spanish bilingual automatic speech recognition model based on the NVIDIA Spanish model.
Speech Recognition · Supports Multiple Languages
projecte-aina · 1,127 · 1

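Conformer-Transducer checkpoints like this one are typically distributed in NVIDIA NeMo format. Below is a minimal transcription sketch, assuming the NeMo toolkit is installed and that the model identifier (inferred here from the publisher and listing name) matches what the model card actually specifies.

```python
# Sketch: transcribe audio with a NeMo Conformer-Transducer ASR model.
# Assumes `nemo_toolkit[asr]` is installed; the model ID below is an assumption.
import nemo.collections.asr as nemo_asr

# Download and restore the pretrained checkpoint.
asr_model = nemo_asr.models.ASRModel.from_pretrained(
    model_name="projecte-aina/stt_ca-es_conformer_transducer_large"
)

# Transcribe one or more 16 kHz mono WAV files.
transcriptions = asr_model.transcribe(["sample_ca_es.wav"])
print(transcriptions)  # return format (strings vs. hypothesis objects) varies by NeMo version
```

The same pattern applies to the other NeMo-style Conformer models in this list (the IndicConformer and NVIDIA Mandarin entries below).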
## Indicconformer Stt Ne Hybrid Ctc Rnnt Large
License: MIT
IndicConformer is a Conformer-based automatic speech recognition model with a hybrid CTC-RNNT architecture, optimized specifically for Nepali.
Speech Recognition · Other
ai4bharat · 36 · 2

## Indicconformer Stt Hi Hybrid Ctc Rnnt Large
License: MIT
IndicConformer is a Conformer-based automatic speech recognition (ASR) model with a hybrid CTC-RNNT architecture, supporting Hindi speech transcription.
Speech Recognition · Other
ai4bharat · 1,694 · 3

## Phogpt 4B Chat
License: BSD-3-Clause
PhoGPT is an open-source series of 4-billion-parameter Vietnamese language generation models, including the base pre-trained model PhoGPT-4B and its conversational variant PhoGPT-4B-Chat.
Large Language Model
Transformers · Other
vinai · 3,647 · 34

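A hedged sketch of loading the chat variant with the Hugging Face transformers causal-LM API. The prompt template below is illustrative only; the model card defines the exact instruction format, and `trust_remote_code` may or may not be required depending on how the architecture is registered.

```python
# Sketch: generate a reply with PhoGPT-4B-Chat via transformers.
# The prompt format here is an assumption; follow the model card's template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vinai/PhoGPT-4B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

prompt = "### Câu hỏi: Việt Nam có bao nhiêu tỉnh thành?\n### Trả lời:"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```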
## Phogpt 4B
License: BSD-3-Clause
PhoGPT is a state-of-the-art 4-billion-parameter Vietnamese language generation model series, including the base pre-trained monolingual model PhoGPT-4B and its conversational variant PhoGPT-4B-Chat.
Large Language Model
Transformers · Other
vinai · 560 · 12

## M2m100 1.2B
License: MIT
A multilingual translation model fine-tuned on the WMT16 dataset, supporting Russian-to-English translation.
Machine Translation
Transformers
kazandaev · 23 · 0

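M2M100 checkpoints are driven through transformers' M2M100 classes: the source language is set on the tokenizer and the target language is forced via the first generated token. A sketch assuming this fine-tune keeps the standard M2M100 interface and the Russian→English direction described above; the hub ID is inferred from the listing and may differ.

```python
# Sketch: Russian -> English translation with an M2M100-based checkpoint.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "kazandaev/m2m100-1.2B"  # assumed ID; check the model card
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "ru"  # source language code
text = "Машинный перевод становится всё точнее."
inputs = tokenizer(text, return_tensors="pt")

# Force English as the target language for generation.
generated = model.generate(
    **inputs, forced_bos_token_id=tokenizer.get_lang_id("en")
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```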
## Codefuse 13B
License: Other
CodeFuse-13B is a 13-billion-parameter code generation model trained with the GPT-NeoX framework. It supports over 40 programming languages and can process code sequences of up to 4,096 characters.
Large Language Model
Transformers
codefuse-ai · 31 · 49

## Llama 2 70b Fb16 Korean
A fine-tuned version of the Llama 2 70B model trained on Korean datasets, focused on Korean and English text generation.
Large Language Model
Transformers · Supports Multiple Languages
quantumaikr · 127 · 37

## Mgpt 1.3B Persian
License: MIT
A 1.3-billion-parameter language model optimized for Persian, fine-tuned from mGPT-XL (1.3B).
Large Language Model
Transformers · Supports Multiple Languages
ai-forever · 84 · 11

## Palmyra Med 20b
License: Apache-2.0
Palmyra-Med-20b is a 20-billion-parameter large language model for the medical domain. Fine-tuned from Palmyra-Large on a medical dataset, it excels at medical dialogue and question-answering tasks.
Large Language Model
Transformers · English
Writer · 4,936 · 35

## Replit
replit-code-v1-3b is a 2.7-billion-parameter causal language model focused on code completion, developed by Replit, Inc.
Large Language Model
Transformers · Other
lentan · 60 · 3

## Convnextv2 Huge.fcmae
A self-supervised feature representation model based on ConvNeXt-V2, pre-trained with the Fully Convolutional Masked Autoencoder (FCMAE) framework, suitable for image classification and feature extraction tasks.
Image Classification
Transformers
timm · 52 · 0

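FCMAE-pretrained ConvNeXt-V2 weights ship without a fine-tuned classifier head, so the natural use is feature extraction through timm. A minimal sketch, assuming the timm model name matches the listing (`convnextv2_huge.fcmae`) and a standard 224×224 input.

```python
# Sketch: extract a pooled feature vector with a ConvNeXt-V2 FCMAE backbone via timm.
import timm
import torch

# num_classes=0 strips the classifier head so the forward pass returns pooled features.
model = timm.create_model("convnextv2_huge.fcmae", pretrained=True, num_classes=0)
model.eval()

# A random tensor stands in for a preprocessed 224x224 RGB image batch.
dummy = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    features = model(dummy)
print(features.shape)  # e.g. (1, 2816) for the huge variant
```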
## Stt Zh Conformer Transducer Large
This is a large Conformer-Transducer model for transcribing Mandarin speech, with approximately 120 million parameters, trained on the AISHELL-2 dataset.
Speech Recognition · Chinese
nvidia · 72 · 13

## Codeparrot
CodeParrot is a GPT-2-based model (1.5 billion parameters) focused on automatic Python code generation.
Large Language Model
Transformers · Other
codeparrot · 1,342 · 105

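Since CodeParrot is a standard GPT-2-style causal language model, it can be driven through the transformers text-generation pipeline. A short sketch, assuming the hub ID `codeparrot/codeparrot`.

```python
# Sketch: complete a Python snippet with CodeParrot via the text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="codeparrot/codeparrot")

prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
outputs = generator(prompt, max_new_tokens=64, do_sample=True, temperature=0.2)
print(outputs[0]["generated_text"])
```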
## Wav2vec2 Xls R 1b Korean
License: Apache-2.0
A Korean automatic speech recognition model fine-tuned from facebook/wav2vec2-xls-r-1b on the kresnik/zeroth_korean (clean) dataset.
Speech Recognition
Transformers · Korean
anantoj · 20 · 2

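XLS-R fine-tunes like this one (and the Indonesian and Russian entries below) are CTC models in transformers, decoded with a Wav2Vec2 processor. A minimal sketch, assuming 16 kHz mono input; the hub ID is inferred from the listing and may differ.

```python
# Sketch: transcribe a 16 kHz waveform with a wav2vec2 XLS-R CTC fine-tune.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "anantoj/wav2vec2-xls-r-1b-korean"  # assumed ID; check the model card
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# `speech` must be a 1-D float array sampled at 16 kHz (e.g. loaded with soundfile/librosa).
speech = torch.zeros(16000)  # one second of silence as a stand-in
inputs = processor(speech.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax over the vocabulary, then collapse repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```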
## Wav2vec2 Xls R 1b En To 15
License: Apache-2.0
Facebook's Wav2Vec2 XLS-R model fine-tuned for speech translation, supporting translation from English into 15 target languages.
Speech Recognition
Transformers · Supports Multiple Languages
facebook · 505 · 3

## Wav2vec2 Large Xls R 1b Indonesian
License: Apache-2.0
An automatic speech recognition model fine-tuned from facebook/wav2vec2-xls-r-1b on the Common Voice Indonesian dataset.
Speech Recognition
Transformers · Other
kingabzpro · 14 · 1

## Rut5 Large
ruT5-large is a Russian text-to-text generation model developed by the SberDevices team, based on the T5 architecture with 737 million parameters.
Large Language Model
Transformers · Other
ai-forever · 1,049 · 36

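ruT5-large follows the standard T5 text-to-text interface in transformers, so it can be loaded with the generic seq2seq classes. A sketch, assuming the hub ID `ai-forever/ruT5-large`; as a pre-trained (not instruction-tuned) model, the task framing depends entirely on downstream fine-tuning, so the prompt below is purely illustrative.

```python
# Sketch: text-to-text generation with ruT5-large via the generic seq2seq API.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ai-forever/ruT5-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative Russian input; a real application would fine-tune for its task first.
text = "Перефразируй: сегодня отличная погода."
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```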
## Wav2vec2 Xls R 2b En To 15
License: Apache-2.0
Facebook's Wav2Vec2 XLS-R model fine-tuned for speech translation in 15 languages, capable of translating spoken English into multiple written target languages.
Speech Recognition
Transformers · Supports Multiple Languages
facebook · 27 · 1

## Wav2vec2 Xlsr 1b Ru
A Russian automatic speech recognition model fine-tuned from facebook/wav2vec2-xls-r-1b on the Common Voice dataset.
Speech Recognition
Transformers · Other
RASMUS · 41 · 2